Results 1 - 18 of 18
1.
Psychon Bull Rev ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689188

ABSTRACT

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words activate a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source-reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To identify wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with word neighbors supported significantly better decoding than training with nonword neighbors in the period immediately following target presentation. Decoding regions included mostly right hemisphere regions in the posterior temporal lobe implicated in phonetic and lexical representation. Additionally, neighbors that aligned with target word beginnings (critical for word recognition) supported decoding, but equivalent phonological overlap with word codas did not, suggesting lexical mediation. Effective connectivity analyses showed a rich pattern of interaction between ROIs that support decoding based on training with lexical neighbors, especially driven by right posterior middle temporal gyrus. Collectively, these results evidence functional representation of wordforms in temporal lobes isolated from phonemic or semantic representations.
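The train-on-neighbors, test-on-untrained-targets generalization logic described above can be sketched with synthetic data. The simulated "ROI activity", the nearest-centroid classifier, and all parameters below are illustrative assumptions, not the study's actual MEG/EEG decoding pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    """Return one mean activity pattern (centroid) per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each trial to the class whose centroid is closest."""
    labels = list(centroids)
    D = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[D.argmin(axis=0)]

# Hypothetical setup: two target words each share a latent "wordform"
# pattern with their phonological neighbors, so a classifier trained on
# neighbor-evoked trials should generalize to untrained targets.
n_feat, n_trials = 20, 40
patterns = rng.normal(size=(2, n_feat))   # latent wordform patterns

def trials(cls, noise=0.8):
    return patterns[cls] + noise * rng.normal(size=(n_trials, n_feat))

# Train on neighbor-evoked responses, test on untrained targets.
X_train = np.vstack([trials(0), trials(1)])
y_train = np.repeat([0, 1], n_trials)
X_test = np.vstack([trials(0), trials(1)])
y_test = np.repeat([0, 1], n_trials)

centroids = nearest_centroid_fit(X_train, y_train)
acc = (nearest_centroid_predict(centroids, X_test) == y_test).mean()
```

Above-chance accuracy on the untrained targets is the signature of a shared (here, wordform-like) representation between training and test items.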

2.
bioRxiv ; 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37503242

ABSTRACT

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words evoke activation of a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To localize wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with either word or nonword neighbors supported decoding in many brain regions during an early analysis window (100-400 ms) reflecting primarily incremental phonological processing. Training with word neighbors, but not nonword neighbors, supported decoding in a bilateral set of temporal lobe ROIs, in a later time window (400-600 ms) reflecting activation related to word recognition. These ROIs included bilateral posterior temporal regions implicated in wordform representation. Effective connectivity analyses among regions within this subset indicated that word-evoked activity influenced the decoding accuracy more than nonword-evoked activity did. Taken together, these results evidence functional representation of wordforms in bilateral temporal lobes isolated from phonemic or semantic representations.

3.
Lang Cogn Neurosci ; 38(6): 765-778, 2023.
Article in English | MEDLINE | ID: mdl-37332658

ABSTRACT

Generativity, the ability to create and evaluate novel constructions, is a fundamental property of human language and cognition. The productivity of generative processes is determined by the scope of the representations they engage. Here we examine the neural representation of reduplication, a productive phonological process that can create novel forms through patterned syllable copying (e.g. ba-mih → ba-ba-mih, ba-mih-mih, or ba-mih-ba). Using MRI-constrained source estimates of combined MEG/EEG data collected during an auditory artificial grammar task, we identified localized cortical activity associated with syllable reduplication pattern contrasts in novel trisyllabic nonwords. Neural decoding analyses identified a set of predominantly right hemisphere temporal lobe regions whose activity reliably discriminated reduplication patterns evoked by untrained, novel stimuli. Effective connectivity analyses suggested that sensitivity to abstracted reduplication patterns was propagated between these temporal regions. These results suggest that localized temporal lobe activity patterns function as abstract representations that support linguistic generativity.

4.
Front Artif Intell ; 6: 1062230, 2023.
Article in English | MEDLINE | ID: mdl-37051161

ABSTRACT

Introduction: The notion of a single localized store of word representations has become increasingly less plausible as evidence has accumulated for the widely distributed neural representation of wordform grounded in motor, perceptual, and conceptual processes. Here, we attempt to combine machine learning methods and neurobiological frameworks to propose a computational model of brain systems potentially responsible for wordform representation. We tested the hypothesis that the functional specialization of word representation in the brain is driven partly by computational optimization. This hypothesis directly addresses the unique problem of mapping sound and articulation vs. mapping sound and meaning. Results: We found that artificial neural networks trained on the mapping between sound and articulation performed poorly in recognizing the mapping between sound and meaning and vice versa. Moreover, a network trained on both tasks simultaneously was less able than either single-task model to discover the features required for efficient mapping between sound and higher-level cognitive states. Furthermore, these networks developed internal representations reflecting specialized task-optimized functions without explicit training. Discussion: Together, these findings demonstrate that different task-directed representations lead to more focused responses and better performance of a machine or algorithm and, hypothetically, the brain. We therefore propose that the functional specialization of word representation mirrors a computational optimization strategy given the nature of the tasks that the human brain faces.
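The same- vs. cross-task pattern reported above can be illustrated with a toy linear model: a readout fit to one arbitrary input-output mapping transfers poorly to an unrelated mapping over the same inputs. The data, the two binary tasks, and the least-squares readout below are illustrative assumptions, not the study's networks:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 30
X = rng.normal(size=(n, d))          # "sound" input patterns

# Two unrelated binary tasks over the same inputs, standing in for the
# sound->articulation and sound->meaning mappings (purely illustrative).
w_artic = rng.normal(size=d)
w_sem = rng.normal(size=d)
y_artic = (X @ w_artic > 0).astype(float)
y_sem = (X @ w_sem > 0).astype(float)

def fit_readout(X, y):
    """Least-squares linear readout with a bias term."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def accuracy(w, X, y):
    Xb = np.c_[X, np.ones(len(X))]
    return (((Xb @ w) > 0.5) == y.astype(bool)).mean()

w = fit_readout(X, y_artic)
same_task = accuracy(w, X, y_artic)   # high: readout matches its own task
cross_task = accuracy(w, X, y_sem)    # near chance: little transfer
```

The gap between `same_task` and `cross_task` is the toy analogue of the specialization result: optimizing for one mapping yields representations that do not serve an unrelated mapping.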

5.
Cognition ; 230: 105322, 2023 01.
Article in English | MEDLINE | ID: mdl-36370613

ABSTRACT

Acceptability judgments are a primary source of evidence in formal linguistic research. Within the generative linguistic tradition, these judgments are attributed to evaluation of novel forms based on implicit knowledge of rules or constraints governing well-formedness. In the domain of phonological acceptability judgments, other factors including ease of articulation and similarity to known forms have been hypothesized to influence evaluation. We used data-driven neural techniques to identify the relative contributions of these factors. Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data revealed patterns of interaction between brain regions that support explicit judgments of the phonological acceptability of spoken nonwords. Comparisons of data obtained with nonwords that varied in terms of onset consonant cluster attestation and acceptability revealed different cortical regions and effective connectivity patterns associated with phonological acceptability judgments. Attested forms produced stronger influences of brain regions implicated in lexical representation and sensorimotor simulation on acoustic-phonetic regions, whereas unattested forms produced stronger influence of phonological control mechanisms on acoustic-phonetic processing. Unacceptable forms produced widespread patterns of interaction consistent with attempted search or repair. Together, these results suggest that speakers' phonological acceptability judgments reflect lexical and sensorimotor factors.


Subject(s)
Judgment, Phonetics, Humans, Magnetoencephalography, Brain Mapping, Electroencephalography
6.
Front Psychol ; 12: 590155, 2021.
Article in English | MEDLINE | ID: mdl-33776832

ABSTRACT

Processes governing the creation, perception and production of spoken words are sensitive to the patterns of speech sounds in the language user's lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to "repair" illegal onset clusters (/sr/) and report them as legal ones (/∫r/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.

7.
Brain Lang ; 170: 12-17, 2017 07.
Article in English | MEDLINE | ID: mdl-28364641

ABSTRACT

In this paper we demonstrate the application of new effective connectivity analyses to characterize changing patterns of task-related directed interaction in large (25-55 node) cortical networks following the onset of aphasia. The subject was a left-handed woman who became aphasic following a right-hemisphere stroke. She was tested on an auditory word-picture verification task administered one and seven months after the onset of aphasia. MEG/EEG and anatomical MRI data were used to create high spatiotemporal resolution estimates of task-related cortical activity. Effective connectivity analyses of those data showed a reduction of bilateral network influences on preserved right-hemisphere structures, and an increase in intra-hemispheric left-hemisphere influences. She developed a connectivity pattern that was more left lateralized than that of right-handed control subjects. Her emergent left hemisphere network showed a combination of increased functional subdivision of perisylvian language areas and recruitment of medial structures.


Subject(s)
Aphasia/etiology, Aphasia/physiopathology, Functional Laterality/physiology, Stroke/complications, Stroke/physiopathology, Electroencephalography, Female, Humans, Language, Magnetic Resonance Imaging, Magnetoencephalography, Middle Aged, Recovery of Function
8.
Lang Cogn Neurosci ; 31(7): 841-855, 2016.
Article in English | MEDLINE | ID: mdl-27595118

ABSTRACT

Sentential context influences the way that listeners identify phonetically ambiguous or perceptually degraded speech sounds. Unfortunately, inherent inferential limitations on the interpretation of behavioral or BOLD imaging results make it unclear whether context influences perceptual processing directly, or acts at a post-perceptual decision stage. In this paper, we use Kalman-filter-enabled Granger causation analysis of MR-constrained MEG/EEG data to distinguish between these possibilities. Using a retrospective probe verification task, we found that sentential context strongly affected the interpretation of words with ambiguous initial voicing (e.g. DUSK-TUSK). This behavioral context effect coincided with increased influence by brain regions associated with lexical representation on regions associated with acoustic-phonetic processing. These results support an interactive view of sentence context effects on speech perception.

9.
Psychol Sci ; 27(7): 1019-26, 2016 07.
Article in English | MEDLINE | ID: mdl-27154551

ABSTRACT

When participants search for a target letter while reading for comprehension, they miss more instances if the target letter is embedded in frequent function words than in less frequent content words. This phenomenon, called the missing-letter effect, has been considered a window on the cognitive mechanisms involved in the visual processing of written language. In the present study, one group of participants read two texts for comprehension while searching for a target letter, and another group listened to a narration of the same two texts while listening for the target letter's corresponding phoneme. The ubiquitous missing-letter effect was replicated and extended to a missing-phoneme effect. Item-based correlations between the reading and listening tasks were high, which led us to conclude that both tasks involve cognitive processes that reading and listening have in common and that both processes are rooted in psycholinguistically driven allocation of attention.


Subject(s)
Comprehension/physiology, Pattern Recognition, Visual/physiology, Reading, Speech Perception/physiology, Adult, Female, Humans, Male, Psycholinguistics, Young Adult
11.
J Mem Lang ; 82: 41-55, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-25883413

ABSTRACT

Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.

12.
PLoS One ; 9(1): e86212, 2014.
Article in English | MEDLINE | ID: mdl-24465965

ABSTRACT

Listeners show a reliable bias towards interpreting speech sounds in a way that conforms to linguistic restrictions (phonotactic constraints) on the permissible patterning of speech sounds in a language. This perceptual bias may enforce and strengthen the systematicity that is the hallmark of phonological representation. Using Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data, we tested the differential predictions of rule-based, frequency-based, and top-down lexical influence-driven explanations of processes that produce phonotactic biases in phoneme categorization. Consistent with the top-down lexical influence account, brain regions associated with the representation of words had a stronger influence on acoustic-phonetic regions in trials that led to the identification of phonotactically legal (versus illegal) word-initial consonant clusters. Regions associated with the application of linguistic rules had no such effect. Similarly, high frequency phoneme clusters failed to produce stronger feedforward influences by acoustic-phonetic regions on areas associated with higher linguistic representation. These results suggest that top-down lexical influences contribute to the systematicity of phonological representation.


Subject(s)
Brain/physiology, Speech Perception/physiology, Speech/physiology, Brain Mapping/methods, Electroencephalography/methods, Humans, Language, Linguistics/methods, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods, Phonetics
13.
Brain Lang ; 121(3): 273-88, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22498237

ABSTRACT

Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use is not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.


Subject(s)
Cerebral Cortex/physiology, Language, Speech Perception/physiology, Humans, Phonetics
14.
Front Psychol ; 3: 506, 2012.
Article in English | MEDLINE | ID: mdl-23293611

ABSTRACT

Granger causation analysis of high spatiotemporal resolution reconstructions of brain activation offers a new window on the dynamic interactions between brain areas that support language processing. Premised on the observation that causes both precede and uniquely predict their effects, this approach provides an intuitive, model-free means of identifying directed causal interactions in the brain. It requires the analysis of all non-redundant potentially interacting signals, and has shown that even "early" processes such as speech perception involve interactions of many areas in a strikingly large network that extends well beyond traditional left hemisphere perisylvian cortex that play out over hundreds of milliseconds. In this paper we describe this technique and review several general findings that reframe the way we think about language processing and brain function in general. These include the extent and complexity of language processing networks, the central role of interactive processing dynamics, the role of processing hubs where the input from many distinct brain regions are integrated, and the degree to which task requirements and stimulus properties influence processing dynamics and inform our understanding of "language-specific" localized processes.
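The core idea the review describes — a cause both precedes and uniquely improves prediction of its effect — reduces, in the simplest bivariate case, to comparing autoregressive models of a signal with and without the putative cause's past. The simulation below is a minimal sketch (toy AR(1) signals, a single lag), not the multivariate, many-ROI machinery used in this line of work:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000

# Simulate two signals where x drives y at lag 1, but y does not drive x.
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + rng.normal()

def granger_stat(src, dst, p=1):
    """Log ratio of residual sums of squares for the restricted model
    (dst's own past) vs. the full model (dst's past + src's past).
    Values well above zero mean src's past improves prediction of dst."""
    n = len(dst)
    Y = dst[p:]
    lag_dst = np.column_stack([dst[p-k-1:n-k-1] for k in range(p)])
    lag_src = np.column_stack([src[p-k-1:n-k-1] for k in range(p)])
    def rss(A):
        A = np.c_[A, np.ones(len(A))]
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r
    return np.log(rss(lag_dst) / rss(np.c_[lag_dst, lag_src]))

gc_x_to_y = granger_stat(x, y)   # substantially above zero
gc_y_to_x = granger_stat(y, x)   # near zero
```

The asymmetry between the two statistics recovers the simulated direction of influence; full analyses assess such statistics across all non-redundant signal pairs with appropriate significance testing.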

15.
Brain Lang ; 110(1): 43-8, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19356793

ABSTRACT

In this work, we apply Granger causality analysis to high spatiotemporal resolution intracranial EEG (iEEG) data to examine how different components of the left perisylvian language network interact during spoken language perception. The specific focus is on the characterization of serial versus parallel processing dependencies in the dominant hemisphere dorsal and ventral speech processing streams. Analysis of iEEG data from a large, 64-electrode grid implanted over the left perisylvian region in a single right-handed patient showed a consistent pattern of direct posterior superior temporal gyrus influence over sites distributed over the entire ventral pathway for words, non-words, and phonetically ambiguous items that could be interpreted either as words or non-words. For the phonetically ambiguous items, this pattern was overlaid by additional dependencies involving the inferior frontal gyrus, which influenced activation measured at electrodes located in both ventral and dorsal stream speech structures. Implications of these results for understanding the functional architecture of spoken language processing and interpreting the role of the posterior superior temporal gyrus in speech perception are discussed.


Subject(s)
Frontal Lobe/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Electrodes, Implanted, Electroencephalography, Humans, Imaging, Three-Dimensional, Magnetic Resonance Imaging, Male, Time, Young Adult
16.
Cognition ; 110(2): 222-36, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19110238

ABSTRACT

The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causation analyses of high spatiotemporal resolution neural activation data, derived from the integration of magnetic resonance imaging, magnetoencephalography and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment and the phonological validity of assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase locking patterns identified a large distributed neural network including 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
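The 40 Hz gamma phase locking mentioned above is commonly quantified as the inter-trial phase-locking value (PLV): the magnitude of the mean unit phasor of per-trial phase. The phase time courses below are synthetic (the jitter level, trial count, and sampling rate are assumptions), standing in for phases that would in practice be derived from the recorded signals:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, freq, n_trials = 500, 40, 60
t = np.arange(0, 1.0, 1 / fs)        # 1 s of samples at 500 Hz

def plv(phases):
    """Phase-locking value across trials at each time point:
    1 = perfectly consistent phase, near 0 = random phase."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

# Per-trial 40 Hz phase time courses: the "locked" condition has small
# trial-to-trial phase jitter, the "unlocked" one a random phase offset.
locked = np.array(
    [2 * np.pi * freq * t + 0.3 * rng.normal() for _ in range(n_trials)])
unlocked = np.array(
    [2 * np.pi * freq * t + 2 * np.pi * rng.random() for _ in range(n_trials)])

plv_locked = plv(locked).mean()      # close to 1
plv_unlocked = plv(unlocked).mean()  # close to 0
```

High PLV at a sensor or source marks time windows where the 40 Hz phase is reproducible across trials, which is what defines the network entering the connectivity analysis.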


Subject(s)
Speech Perception/physiology, Speech/physiology, Adult, Electroencephalography, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Psycholinguistics, Psychomotor Performance/physiology
17.
Neuroimage ; 43(3): 614-23, 2008 Nov 15.
Article in English | MEDLINE | ID: mdl-18703146

ABSTRACT

Behavioral and functional imaging studies have demonstrated that lexical knowledge influences the categorization of perceptually ambiguous speech sounds. However, methodological and inferential constraints have so far been unable to resolve the question of whether this interaction takes the form of direct top-down influences on perceptual processing, or feedforward convergence during a decision process. We examined top-down lexical influences on the categorization of segments in a /s/-/∫/ continuum presented in different lexical contexts to produce a robust Ganong effect. Using integrated MEG/EEG and MRI data we found that, within a network identified by 40 Hz gamma phase locking, activation in the supramarginal gyrus associated with wordform representation influences phonetic processing in the posterior superior temporal gyrus during a period of time associated with lexical processing. This result provides direct evidence that lexical processes influence lower level phonetic perception, and demonstrates the potential value of combining Granger causality analyses and high spatiotemporal resolution multimodal imaging data to explore the functional architecture of cognition.


Subject(s)
Brain Mapping, Brain/physiology, Electroencephalography, Image Interpretation, Computer-Assisted/methods, Magnetoencephalography, Speech Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male
18.
Percept Psychophys ; 65(4): 575-90, 2003 May.
Article in English | MEDLINE | ID: mdl-12812280

ABSTRACT

For listeners to recognize words, they must map temporally distributed phonetic feature cues onto higher order phonological representations. Three experiments are reported that were performed to examine what information listeners extract from assimilated segments (e.g., place-assimilated tokens of cone that resemble comb) and how they interpret it. Experiment 1 employed form priming to demonstrate that listeners activate the underlying form of CONE, but not of its neighbor (COMB). Experiment 2 employed phoneme monitoring to show that the same assimilated tokens facilitate the perception of postassimilation context. Together, the results of these two experiments suggest that listeners recover both the underlying place of the modified item and information about the subsequent item from the same modified segment. Experiment 3 replicated Experiment 1, using different postassimilation contexts to demonstrate that context effects do not reflect familiarity with a given assimilation process. The results are discussed in the context of general auditory grouping mechanisms.


Subject(s)
Cues, Recognition, Psychology, Speech Perception, Vocabulary, Adult, Female, Humans, Male, Phonetics, Random Allocation, Reaction Time